View Synthesis: LiDAR Camera versus Depth Estimation

Authors

Abstract

Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and their corresponding depth maps. However, this requires accurate depth map estimation, which incurs a high computational cost of several minutes per frame with DERS (MPEG-I's Depth Estimation Reference Software), even on a high-end computer. LiDAR cameras may thus be an alternative solution for real-time DIBR applications. We compare the view synthesis quality obtained with a low-cost LiDAR camera, the Intel RealSense L515 (adequately calibrated and configured), against DERS depth estimation, using MPEG-I's Reference View Synthesizer (RVS) in both cases. In terms of IV-PSNR, the LiDAR camera reaches 32.2 dB of synthesis quality at a 15 cm baseline and 40.3 dB at a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter offers a better quality-performance trade-off. Visual inspection shows that the LiDAR's synthesized views have slightly higher quality in most of the tested low-texture scene areas, except around object borders. Overall, we recommend the LiDAR camera for real-time applications and advanced depth estimation methods (like DERS) when synthesis quality is the priority. Nevertheless, the delicate calibration procedure, involving multiple tools, is further exposed in this paper.
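For readers unfamiliar with DIBR, the sketch below illustrates the core operation shared by both pipelines: back-projecting each source pixel with its depth value and re-projecting it into the virtual camera. It is a minimal point-splatting illustration in Python/NumPy, not the algorithm of RVS (which uses triangle-based warping, blending, and inpainting); all names and parameters (forward_warp, K_src, K_dst, R, t) are assumptions made for this example.

    # Minimal DIBR sketch: forward-warp a source view into a virtual view
    # using its depth map.  Camera parameters are illustrative placeholders,
    # not those of the paper's capture setup.
    import numpy as np

    def forward_warp(color, depth, K_src, K_dst, R, t):
        """color: (H, W, 3) source image; depth: (H, W) metric depth
        (e.g. from the L515 or from an estimator); K_src, K_dst: 3x3
        intrinsics; R, t: rotation and translation from source to
        virtual camera."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

        # Back-project to 3D in the source camera frame, then transform.
        pts_src = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)
        pts_dst = R @ pts_src + t.reshape(3, 1)

        # Project into the virtual camera and splat with a simple z-buffer.
        proj = K_dst @ pts_dst
        z = proj[2]
        valid = z > 1e-6
        uu = np.round(proj[0, valid] / z[valid]).astype(int)
        vv = np.round(proj[1, valid] / z[valid]).astype(int)
        inside = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)

        out = np.zeros_like(color)
        zbuf = np.full((H, W), np.inf)
        src_rgb = color.reshape(-1, 3)[valid][inside]
        for x, y, zc, rgb in zip(uu[inside], vv[inside], z[valid][inside], src_rgb):
            if zc < zbuf[y, x]:          # keep the nearest surface only
                zbuf[y, x] = zc
                out[y, x] = rgb
        return out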


Related Articles

Depth Estimation for View Synthesis in Multimedia Video Coding

The compression of multiview video in an end-to-end 3D system is required to reduce the amount of visual information. Since multiple cameras usually have a common field of view, high compression ratios can be achieved if both the temporal and inter-view redundancy are exploited. View synthesis prediction is a new coding tool for multiview video that essentially generates virtual views of a scen...


Depth Estimation with a Practical Camera

Given an off-the-shelf camera, one has the freedom to move the camera or play around with its intrinsic parameters such as zoom or aperture settings. We propose a framework for depth estimation from a set of calibrated images, captured under general camera motion and parameter variation. Our framework considers the practical trade-offs in a camera and hence essentially generalizes the more cons...


Motion Guided LIDAR-camera Autocalibration and Accelerated Depth Super Resolution

In this work we propose a novel motion guided method for automatic and targetless calibration of a LiDAR and camera and use the LiDAR points projected on the image for real-time super-resolution depth estimation. The calibration parameters are estimated by optimizing a cost function that penalizes the difference in the motion vectors obtained from LiDAR and camera data separately. For super-res...
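The exact cost function is not reproduced here; the snippet below is only a sketch of the kind of motion-consistency objective described, assuming per-frame translation vectors are already available from LiDAR odometry and visual odometry. The function name, the alignment residual, and the Nelder-Mead setup are illustrative assumptions, not the authors' implementation.

    # Sketch of a motion-consistency cost: ego-motion is estimated separately
    # from LiDAR and camera data, and a candidate extrinsic rotation is scored
    # by how well it aligns the two motion streams (assumed formulation).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def motion_cost(rotvec, lidar_motions, cam_motions):
        """lidar_motions / cam_motions: lists of per-frame translation
        vectors from LiDAR odometry and visual odometry."""
        R = Rotation.from_rotvec(rotvec).as_matrix()
        cost = 0.0
        for t_l, t_c in zip(lidar_motions, cam_motions):
            a = R @ t_l / np.linalg.norm(t_l)
            b = t_c / np.linalg.norm(t_c)
            cost += 1.0 - float(a @ b)   # penalize angular disagreement
        return cost

    # Usage sketch:
    # res = minimize(motion_cost, x0=np.zeros(3),
    #                args=(lidar_motions, cam_motions), method="Nelder-Mead")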


Camera Arrangement in Visual 3D Systems using Iso-disparity Model to Enhance Depth Estimation Accuracy

In this paper we address the problem of automatic arrangement of cameras in a 3D system to enhance the performance of depth acquisition procedure. Lacking ground truth or a priori information, a measure of uncertainty is required to assess the quality of reconstruction. The mathematical model of iso-disparity surfaces provides an efficient way to estimate the depth estimation uncertainty which ...
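As a concrete instance of why camera arrangement matters, the snippet below uses the textbook rectified-stereo relation z = f*b/d, under which iso-disparity surfaces are planes whose spacing, and hence the depth uncertainty, grows quadratically with depth. This simplified two-camera model is an assumption made for illustration; the paper's iso-disparity analysis covers more general arrangements.

    # Spacing between neighbouring iso-disparity surfaces at depth z for a
    # rectified stereo pair (simplified model, assumed for illustration).
    def depth_uncertainty(z, focal_px, baseline_m, disp_step_px=1.0):
        """With z = f*b/d, a disparity step of d_step corresponds to a depth
        band of roughly z**2 * d_step / (f*b): it grows quadratically with z."""
        return (z ** 2) * disp_step_px / (focal_px * baseline_m)

    # Example: f = 1000 px, b = 0.1 m -> one disparity step spans ~4 cm at
    # 2 m but ~64 cm at 8 m, so camera placement directly controls accuracy.
    print(depth_uncertainty(2.0, 1000, 0.1), depth_uncertainty(8.0, 1000, 0.1))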


Depth Estimation with a Practical Camera

We propose a framework for depth estimation from a set of calibrated images, captured with a moving camera with varying parameters. Our framework respects the physical limits of the camera, and considers various effects such as motion parallax, defocus blur, zooming and occlusions which are often unavoidable. In fact, the stereo [1] and the depth from defocus [2] are essentially special cases i...



Journal

Journal: Computer Science Research Notes

Year: 2021

ISSN: 2464-4625, 2464-4617

DOI: https://doi.org/10.24132/csrn.2021.3101.35